Free diffusions and matrix models with strictly convex interaction

Authors

  • Alice Guionnet
  • D. Shlyakhtenko
Abstract

We study solutions to the free stochastic differential equation $dX_t = dS_t - \frac{1}{2}DV(X_t)\,dt$, where $V$ is a locally convex polynomial potential in $m$ non-commuting variables. We show that for self-adjoint $V$, the law $\mu_V$ of a stationary solution is the limit law of a random matrix model, in which an $m$-tuple of self-adjoint matrices is chosen according to the law $\exp(-N\,\mathrm{Tr}(V(A_1,\ldots,A_m)))\,dA_1\cdots dA_m$. We show that if $V = V_\beta$ depends on complex parameters $\beta_1,\ldots,\beta_k$, then the law $\mu_V$ is analytic in $\beta$, at least for those $\beta$ for which $V_\beta$ is locally convex. In particular, this gives information on the region of convergence of the generating function for planar maps. We prove that the solution $X_t$ has nice convergence properties with respect to the operator norm as $t$ goes to infinity. This allows us to show that the $C^*$- and $W^*$-algebras generated by an $m$-tuple with law $\mu_V$ share many properties with those generated by a semi-circular system. Among them are the lack of projections, exactness, the Haagerup property, and embeddability into the ultrapower of the hyperfinite $\mathrm{II}_1$ factor. We show that the microstates free entropy $\chi(\tau_V)$ is finite. A corollary of these results is the fact that the support of the law of any self-adjoint polynomial in $X_1,\ldots,X_m$ under the law $\mu_V$ is connected, vastly generalizing the case of a single random matrix. We also deduce from this dynamical approach that the convergence of the operator norms of independent matrices from the GUE, proved by Haagerup and S. Thorbjørnsen [14], extends to the context of matrices interacting via a convex potential.
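The free SDE above has a standard finite-$N$ analogue: the matrix Langevin dynamics $dA_t = dH^N_t - \frac{1}{2}DV(A_t)\,dt$, where $H^N_t$ is Hermitian Brownian motion with entries of variance $t/N$, whose stationary distribution is proportional to $\exp(-N\,\mathrm{Tr}(V(A)))\,dA$. The following Python sketch is not taken from the paper; it only illustrates this connection in the single-matrix case ($m = 1$) with the illustrative convex potential $V(x) = x^2/2 + g x^4$ (cyclic derivative $DV(x) = x + 4gx^3$), and the matrix size, coupling $g$, step size, and number of steps are placeholder assumptions.

```python
# Minimal sketch (illustrative, not from the paper): Euler-Maruyama simulation of
# the finite-N matrix analogue of the free SDE  dX_t = dS_t - (1/2) DV(X_t) dt,
# for a single Hermitian matrix with V(x) = x^2/2 + g*x^4.  The stationary law of
# this dynamics approximates the matrix model exp(-N Tr V(A)) dA.
import numpy as np

def hermitian_brownian_increment(n, dt, rng):
    """Hermitian Gaussian increment with E|H_ij|^2 = dt/n (free-BM normalization)."""
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    h = (z + z.conj().T) / 2.0
    return np.sqrt(dt / n) * h

def cyclic_derivative_V(a, g):
    """DV(A) = A + 4 g A^3 for V(x) = x^2/2 + g x^4 (convex for g >= 0)."""
    return a + 4.0 * g * (a @ a @ a)

def simulate_stationary(n=100, g=0.1, dt=0.01, n_steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    a = np.zeros((n, n), dtype=complex)          # start from A_0 = 0
    for _ in range(n_steps):
        a = a - 0.5 * dt * cyclic_derivative_V(a, g) \
              + hermitian_brownian_increment(n, dt, rng)
    return a

if __name__ == "__main__":
    a = simulate_stationary()
    eigs = np.linalg.eigvalsh(a)
    # Normalized trace moments tau(A^k); for g = 0 they approach the
    # semicircle moments (0, 1, 0, 2, ...).
    for k in range(1, 5):
        print(f"tau(A^{k}) ~ {np.mean(eigs ** k):+.3f}")
    print("spectral support ~ [{:.2f}, {:.2f}]".format(eigs.min(), eigs.max()))
```

For $g = 0$ the empirical spectrum approaches the semicircle law on $[-2, 2]$; for $g > 0$ it approximates the equilibrium measure of the convex one-matrix model, consistent with the connectedness of the support discussed in the abstract.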


Similar Articles

Weighted composition operators between growth spaces on circular and strictly convex domain

Let $\Omega_X$ be a bounded, circular and strictly convex domain of a Banach space $X$ and $\mathcal{H}(\Omega_X)$ denote the space of all holomorphic functions defined on $\Omega_X$. The growth space $\mathcal{A}^\omega(\Omega_X)$ is the space of all $f \in \mathcal{H}(\Omega_X)$ for which $$|f(x)| \leqslant C\,\omega(r_{\Omega_X}(x)), \quad x \in \Omega_X,$$ for some constant $C>0$, whenever $r_{\Omega_X}$ is the M...


A Semidefinite Optimization Approach to Quadratic Fractional Optimization with a Strictly Convex Quadratic Constraint

In this paper we consider a fractional optimization problem that minimizes the ratio of two quadratic functions subject to a strictly convex quadratic constraint. First, using an extension of the Charnes-Cooper transformation, an equivalent homogenized quadratic reformulation of the problem is given. Then we show that under certain assumptions, it can be solved to global optimality using semidefini...


Some Particular Self-interacting Diffusions: Ergodic Behavior and Almost Sure Convergence (Sébastien Chambeu and Aline Kurtzmann)

This paper deals with some self-interacting diffusions $(X_t, t \ge 0)$ living on $\mathbb{R}$. These diffusions are solutions to stochastic differential equations: $dX_t = dB_t - g(t)\nabla V(X_t - \mu_t)\,dt$, where $\mu_t$ is the mean of the empirical measure of the process $X$, $V$ is an asymptotically strictly convex potential and $g$ is a given function. We study the ergodic behavior of $X$ and prove that it is strongly related to...
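As an aside, the self-interacting SDE in this entry is easy to simulate by Euler-Maruyama once the empirical mean is tracked along the path. The sketch below is purely illustrative and not the authors' example: the choices $V(x) = x^2/2$ (so $\nabla V(x) = x$), $g(t) = 1 + \log(1+t)$, the time horizon, and the step size are placeholder assumptions.

```python
# Minimal sketch (illustrative): Euler-Maruyama simulation of a self-interacting
# diffusion on R,  dX_t = dB_t - g(t) V'(X_t - mu_t) dt,  where
# mu_t = (1/t) * integral_0^t X_s ds is the running empirical mean of the path.
import numpy as np

def simulate(t_max=200.0, dt=1e-3, x0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    x = x0
    running_integral = 0.0                    # approximates \int_0^t X_s ds
    for k in range(1, n_steps + 1):
        t = k * dt
        running_integral += x * dt
        mu = running_integral / t             # empirical mean mu_t
        g = 1.0 + np.log1p(t)                 # placeholder choice of g(t)
        # drift -g(t) * V'(x - mu) with V(x) = x^2/2, plus Brownian increment
        x += -g * (x - mu) * dt + np.sqrt(dt) * rng.standard_normal()
    return x, mu

if __name__ == "__main__":
    x_final, mu_final = simulate()
    print(f"X_T = {x_final:+.3f},  empirical mean mu_T = {mu_final:+.3f}")
```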


Convergence in Distribution of Some Particular Self-interacting Diffusions: the Simulated Annealing Method

The present paper is concerned with some self-interacting diffusions $(X_t, t \ge 0)$ living on $\mathbb{R}$. These diffusions are solutions to stochastic differential equations: $dX_t = dB_t - g(t)\nabla V(X_t - \mu_t)\,dt$, where $\mu_t$ is the empirical mean of the process $X$, $V$ is an asymptotically strictly convex potential and $g$ is a given function. The authors have previously studied the ergodic behavior of $X$ and proved that it is...


A Recurrent Neural Network for Solving Strictly Convex Quadratic Programming Problems

In this paper we present an improved neural network to solve strictly convex quadratic programming (QP) problems. The proposed model is derived from a piecewise equation corresponding to the optimality conditions of the convex QP problem, and it has lower structural complexity than other existing neural network models for solving such problems. On the theoretical side, stability and global converge...
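For orientation, neural-dynamics approaches of this kind integrate an ODE whose equilibria coincide with the KKT points of the QP. The sketch below shows a generic projection-type dynamics for a box-constrained strictly convex QP; it is not necessarily the piecewise-equation model proposed in the cited paper, and the data Q, c, the bounds, alpha, dt, and n_steps are illustrative assumptions.

```python
# Minimal sketch (a generic projection-type neural dynamics, not the cited
# paper's specific model): solve  min 1/2 x^T Q x + c^T x  s.t.  l <= x <= u
# by integrating  dx/dt = P_[l,u](x - alpha*(Q x + c)) - x.
# For a strictly convex QP, the equilibria of this ODE are exactly its minimizers.
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]."""
    return np.minimum(np.maximum(x, lo), hi)

def qp_neural_dynamics(Q, c, lo, hi, alpha=0.1, dt=0.05, n_steps=5000):
    x = np.zeros_like(c)
    for _ in range(n_steps):
        # explicit Euler step of the projection dynamics
        x = x + dt * (project_box(x - alpha * (Q @ x + c), lo, hi) - x)
    return x

if __name__ == "__main__":
    Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
    c = np.array([-8.0, -6.0])
    lo, hi = np.zeros(2), np.full(2, 1.5)
    x_star = qp_neural_dynamics(Q, c, lo, hi)
    print("approximate minimizer:", np.round(x_star, 4))
```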




Publication date: 2007